A numerical comparison of solvers for large-scale, continuous-time algebraic Riccati equations and LQR problems
In this paper, we discuss numerical methods for solving large-scale
continuous-time algebraic Riccati equations. These methods have been the focus
of intensive research in recent years, and significant progress has been made
in both the theoretical understanding and efficient implementation of various
competing algorithms. This manuscript has three goals: first, to
gather in one place an overview of different approaches for solving large-scale
Riccati equations, and to point to the recent advances in each of them. Second,
to analyze and compare the main computational ingredients of these algorithms,
to detect their strong points and their potential bottlenecks. And finally, to
compare the effective implementations of all methods on a set of relevant
benchmark examples, giving an indication of their relative performance.
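At small scale, the equation class the paper studies can be illustrated directly. The sketch below (an illustration only — the solvers compared in the paper target large-scale problems where such dense methods are infeasible; the matrices here are arbitrary) solves a continuous-time algebraic Riccati equation with SciPy and forms the associated LQR gain:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Small, randomly generated LQR problem (illustrative sizes).
rng = np.random.default_rng(0)
n, m = 6, 2
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)  # shifted to be stable
B = rng.standard_normal((n, m))
Q = np.eye(n)  # state weight (symmetric positive semidefinite)
R = np.eye(m)  # input weight (symmetric positive definite)

# Solve the continuous-time algebraic Riccati equation
#   A^T X + X A - X B R^{-1} B^T X + Q = 0
X = solve_continuous_are(A, B, Q, R)

# Optimal LQR state-feedback gain, u = -K x
K = np.linalg.solve(R, B.T @ X)

# Residual of the Riccati equation (should be near zero)
residual = A.T @ X + X @ A - X @ B @ K + Q
```

The large-scale methods surveyed in the paper instead compute a low-rank approximate factorization of X without ever forming it densely.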
Greedy low-rank algorithm for spatial connectome regression
Recovering brain connectivity from tract tracing data is an important
computational problem in the neurosciences. Mesoscopic connectome
reconstruction was previously formulated as a structured matrix regression
problem (Harris et al., 2016), but existing techniques do not scale to the
whole-brain setting. The corresponding matrix equation is challenging to solve
due to large scale, ill-conditioning, and a general form that lacks a
convergent splitting. We propose a greedy low-rank algorithm for the connectome
reconstruction problem in very high dimensions. The algorithm approximates the
solution by a sequence of rank-one updates which exploit the sparse and
positive definite problem structure. This algorithm was described previously
(Kressner and Sirković, 2015) but had never been implemented for this connectome
problem, which posed a number of challenges. We had to design judicious
stopping criteria and employ efficient solvers for the three main sub-problems
of the algorithm, including an efficient GPU implementation that alleviates the
main bottleneck for large datasets. The performance of the method is evaluated
on three examples: an artificial "toy" dataset and two whole-cortex instances
using data from the Allen Mouse Brain Connectivity Atlas. We find that the
method is significantly faster than previous methods and that moderate ranks
offer good approximation. This speedup allows for the estimation of
increasingly large-scale connectomes across taxa as these data become available
from tracing experiments. The data and code are available online.
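The greedy rank-one idea can be sketched on a toy problem. The code below is a minimal stand-in, not the paper's implementation: it uses a small dense Sylvester equation A X + X B = C in place of the structured regression problem, an alternating-least-squares inner loop built from Kronecker identities, and arbitrary sizes — the actual algorithm exploits sparsity and positive definiteness and solves its sub-problems at whole-brain scale (with a GPU solver for the main bottleneck).

```python
import numpy as np

# Toy matrix equation A X + X B = C (illustrative stand-in).
rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
I = np.eye(n)

def residual(X):
    return C - A @ X - X @ B

X = np.zeros((n, n))
history = [np.linalg.norm(residual(X))]
for step in range(15):                     # greedy rank-one updates
    R = residual(X)
    v = rng.standard_normal(n)             # random start for the inner ALS loop
    for _ in range(5):                     # alternating least-squares sweeps
        # Fix v, minimize ||A u v^T + u (B^T v)^T - R||_F over u, using
        # vec(A u v^T) = (v kron A) u and vec(u w^T) = (w kron I) u, w = B^T v.
        w = B.T @ v
        M_u = np.kron(v.reshape(-1, 1), A) + np.kron(w.reshape(-1, 1), I)
        u = np.linalg.lstsq(M_u, R.flatten(order="F"), rcond=None)[0]
        # Fix u, minimize over v, using vec(a v^T) = (I kron a) v with a = A u
        # and vec(u v^T B) = (B^T kron u) v.
        a = (A @ u).reshape(-1, 1)
        M_v = np.kron(I, a) + np.kron(B.T, u.reshape(-1, 1))
        v = np.linalg.lstsq(M_v, R.flatten(order="F"), rcond=None)[0]
    X += np.outer(u, v)                    # rank-one update of the iterate
    history.append(np.linalg.norm(residual(X)))
```

Each update can only decrease the residual, since u = 0 (or v = 0) is always feasible in the inner least-squares problems; the residual history is therefore monotonically non-increasing.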
An H2-type error bound for time-limited balanced truncation
When solving partial differential equations numerically, usually a high
order spatial discretization is needed. Model order reduction (MOR)
techniques are often used to reduce the order of spatially-discretized
systems and hence reduce computational complexity. A particular MOR technique
to obtain a reduced order model (ROM) is balanced truncation (BT). However,
if one aims at finding a good ROM on a certain finite time interval only,
time-limited BT (TLBT) can be a more accurate alternative. So far, no error
bound on TLBT has been proved. In this paper, we close this gap in the theory
by providing an H2 error bound for TLBT with two different representations.
The performance of the error bound is then shown in several numerical
experiments.
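A minimal sketch of the time-limited BT procedure is given below. The system, time horizon T, and reduced order r are illustrative assumptions; the sketch uses the standard square-root form of BT together with the identity P_T = P - e^{AT} P e^{A^T T} (and its dual) relating the time-limited Gramians on [0, T] to the infinite-horizon ones for a stable A.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm, svd, cholesky

# Small random stable LTI system (illustrative sizes and horizon).
rng = np.random.default_rng(1)
n, m, p, r, T = 8, 2, 2, 3, 1.0
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)  # shifted to be stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Infinite-horizon Gramians: A P + P A^T + B B^T = 0, A^T Q + Q A + C^T C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Time-limited Gramians on [0, T]: P_T = P - e^{AT} P e^{A^T T}, and dually.
E = expm(A * T)
P_T = P - E @ P @ E.T
Q_T = Q - E.T @ Q @ E

# Square-root balancing: factor the Gramians and truncate the SVD.
Zp = cholesky(P_T, lower=True)
Zq = cholesky(Q_T, lower=True)
U, s, Vt = svd(Zq.T @ Zp)                  # s = time-limited Hankel values
W = Zq @ U[:, :r] / np.sqrt(s[:r])         # left projection matrix
V = Zp @ Vt[:r].T / np.sqrt(s[:r])         # right projection matrix

Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V   # reduced-order model
```

By construction W^T V = I and the projected Gramians W^T P_T W and V^T Q_T V both equal diag(s[:r]), i.e. the ROM is balanced with respect to the time-limited Gramians; the truncated values s[r:] enter the error bound derived in the paper.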